
Fix: TAS assignment error #3937

Merged: 4 commits merged into kubernetes-sigs:main on Jan 14, 2025
Conversation

@kerthcet (Contributor) commented Jan 7, 2025

What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #3887

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fix building TAS assignments for workloads with multiple PodSets (e.g. JobSet or Kubeflow Jobs). The assignment was computed independently for each PodSet, which could result in conflicts that rendered the pods unschedulable by the kube-scheduler.
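
To illustrate the failure mode in the release note, here is a minimal, self-contained Go sketch. The domain type and assign function are invented for this example (they are not Kueue's types or its actual algorithm); the sketch only shows how assigning each PodSet against empty usage can double-book a node, while carrying the accumulated usage forward avoids the conflict.

package main

import "fmt"

// domain models a single topology domain (for example a node) with a pod
// capacity. All names here are hypothetical; this is not Kueue's API.
type domain struct {
    id       string
    capacity int
}

// assign greedily places count pods onto domains, taking into account usage
// already consumed by previously assigned PodSets of the same workload.
func assign(domains []domain, usage map[string]int, count int) map[string]int {
    out := map[string]int{}
    for _, d := range domains {
        free := d.capacity - usage[d.id]
        if free <= 0 || count == 0 {
            continue
        }
        n := free
        if count < n {
            n = count
        }
        out[d.id] = n
        count -= n
    }
    return out
}

func main() {
    domains := []domain{{id: "x1", capacity: 8}, {id: "y1", capacity: 8}}

    // Buggy behaviour: each PodSet is assigned against empty usage, so both
    // PodSets land on x1 (15 pods on a node that can only hold 8).
    a1 := assign(domains, map[string]int{}, 8)
    a2 := assign(domains, map[string]int{}, 7)
    fmt.Println("independent:", a1, a2) // independent: map[x1:8] map[x1:7]

    // Fixed behaviour: usage from the first PodSet is carried into the second
    // assignment, which pushes the second PodSet onto y1.
    usage := map[string]int{}
    b1 := assign(domains, usage, 8)
    for id, n := range b1 {
        usage[id] += n
    }
    b2 := assign(domains, usage, 7)
    fmt.Println("accumulated:", b1, b2) // accumulated: map[x1:8] map[y1:7]
}

In the actual fix, the accumulated per-domain usage is recorded on the snapshot between PodSet assignments via s.addUsage, as quoted in the review thread below.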

@k8s-ci-robot (Contributor)

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot added the do-not-merge/work-in-progress, release-note-none, kind/bug, and cncf-cla: yes labels on Jan 7, 2025
@kerthcet (Contributor, Author) commented Jan 7, 2025

/test

@k8s-ci-robot (Contributor)

@kerthcet: The /test command needs one or more targets.
The following commands are available to trigger required jobs:

/test pull-kueue-build-image-main
/test pull-kueue-test-e2e-main-1-29
/test pull-kueue-test-e2e-main-1-30
/test pull-kueue-test-e2e-main-1-31
/test pull-kueue-test-integration-main
/test pull-kueue-test-multikueue-e2e-main
/test pull-kueue-test-scheduling-perf-main
/test pull-kueue-test-tas-e2e-main
/test pull-kueue-test-unit-main
/test pull-kueue-verify-main

Use /test all to run all jobs.

In response to this:

/test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/S label on Jan 7, 2025
@kerthcet (Contributor, Author) commented Jan 7, 2025

/test all

netlify bot commented Jan 7, 2025

Deploy Preview for kubernetes-sigs-kueue canceled.

🔨 Latest commit: 3b650c2
🔍 Latest deploy log: https://app.netlify.com/sites/kubernetes-sigs-kueue/deploys/678602a4cb1f5600087f8453

Signed-off-by: kerthcet <[email protected]>
@kerthcet (Contributor, Author) commented Jan 7, 2025

/test all

@mimowo (Contributor) left a comment

Please add a unit test in scheduler_test.go TestScheduleForTAS for the problematic scenario.

// Charge every resource of the single-pod request for all pods placed in
// this domain (domain.state holds that pod count), and record it on the
// domain so the next PodSet's assignment sees the reduced free capacity.
for k, v := range singlePodRequest {
    usage[k] = v * int64(domain.state)
}
s.addUsage(domain.id, usage)

Contributor

I believe this will work fine for now, as TAS is not combined with cohorts, and so only one workload is considered in each scheduling cycle on the snapshot. However, this will not work well when we start supporting cohorts; in that case, usage coming from the assignment phase for a workload that is de-prioritized in a cohort will consume the capacity, while the scheduler should operate on "clean" capacity.

One way to go about it in the future would be to remember the usage coming from "Inflight" PodSets and clean it up at the end of the assignment phase, before the actual scheduling phase.

Since I don't see this as problematic under the current assumptions (no cohorts), I believe a TODO referencing #3761 is sufficient.
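
To make the suggested future direction a bit more concrete, here is a rough Go sketch under stated assumptions: the usageSink interface and inflightTracker type are hypothetical and only mirror the idea of remembering the per-domain deltas added for in-flight PodSets during the assignment phase, then subtracting them again so the scheduling phase sees clean capacity. Only the addUsage name corresponds to something visible in the quoted snippet.

package sketch

// usageSink abstracts the piece of the snapshot this sketch needs; in the
// real code this corresponds to the snapshot's addUsage method, but the
// names below are hypothetical.
type usageSink interface {
    addUsage(domainID string, usage map[string]int64)
}

type inflightDelta struct {
    domainID string
    usage    map[string]int64
}

// inflightTracker records usage added for in-flight PodSets during the
// assignment phase so it can be reverted before the scheduling phase.
type inflightTracker struct {
    deltas []inflightDelta
}

// add applies the usage to the sink and remembers the delta.
func (t *inflightTracker) add(s usageSink, domainID string, usage map[string]int64) {
    s.addUsage(domainID, usage)
    t.deltas = append(t.deltas, inflightDelta{domainID: domainID, usage: usage})
}

// revert subtracts every recorded delta, restoring the capacity that the
// in-flight PodSets consumed during assignment.
func (t *inflightTracker) revert(s usageSink) {
    for _, d := range t.deltas {
        neg := make(map[string]int64, len(d.usage))
        for k, v := range d.usage {
            neg[k] = -v
        }
        s.addUsage(d.domainID, neg)
    }
    t.deltas = nil
}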

Contributor Author

Will add tests later.

Contributor

Do you mean when you have time, or in a follow-up PR? My preference would be to add the test in this PR.

Contributor Author

I mean when I have time later; having code without tests is terrible.

Contributor

Sure, I would like to include the fix in 0.10.1, tentatively planned for next week. Do you think you could add the test by then? I can also help to write it.

Contributor Author

Will do this weekend.

Contributor

Can we have a test that covers a case where the requested topology level is not the lowest one? E.g. the lowest level is hostname, but the job requests (and can only fit into) a block. I'm concerned whether it will work when the requested level is not the lowest one, since the s.addUsage() method adds usage only to leaves of the topology tree structure.

Contributor Author

Makes sense, but I think it still works, because we traverse down the domains level by level until the lowest ones, which means we always end up with the lowest-level domains in addUsage(). Added the tests. PTAL.
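
For illustration, here is a small Go sketch of the argument above; topologyDomain and leafDomains are hypothetical names, not Kueue's actual data structures. The idea is that even when a PodSet only requires a higher-level domain such as a block, the assignment keeps descending level by level, so only the lowest-level (leaf) domains are ever passed to addUsage.

package sketch

// topologyDomain is a hypothetical node in the topology tree, for example
// block -> rack -> hostname.
type topologyDomain struct {
    id       string
    children []*topologyDomain
}

// leafDomains walks the subtree below d and returns its leaves. A PodSet
// admitted at block level is therefore still charged against hostname-level
// domains, because the descent only stops at leaves.
func leafDomains(d *topologyDomain) []*topologyDomain {
    if len(d.children) == 0 {
        return []*topologyDomain{d}
    }
    var leaves []*topologyDomain
    for _, c := range d.children {
        leaves = append(leaves, leafDomains(c)...)
    }
    return leaves
}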

@PBundyra (Contributor), Jan 14, 2025

Can you adjust the test so the topology is made of three levels, and then also add another PodSet that makes the workload not fit within the requested topology? I am afraid this usage may not be counted when it's not at the lowest level, so that is what I wanted to test.

Contributor

On second consideration, I think you're actually right and the usage will be tracked correctly. Thanks for adding the test.

@k8s-ci-robot added the size/M label and removed the size/S label on Jan 8, 2025
@k8s-ci-robot added the size/S label and removed the size/M label on Jan 8, 2025
@mimowo mentioned this pull request on Jan 9, 2025
@k8s-ci-robot added the size/L label and removed the size/S label on Jan 13, 2025
@kerthcet (Contributor, Author) commented Jan 13, 2025

Test added, PTAL.
Before the fix, it failed as below, which proves the bug is fixed:

              				TopologyAssignment: &v1beta1.TopologyAssignment{
              					Levels: {"kubernetes.io/hostname"},
              					Domains: []v1beta1.TopologyDomainAssignment{
              						{
              							Values: {"x1"},
            - 							Count:  7,
            + 							Count:  8,
              						},
              						{
              							Values: {"y1"},
            - 							Count:  8,
            + 							Count:  7,
              						},
              					},
              				},

@kerthcet marked this pull request as ready for review on January 13, 2025 08:09
@k8s-ci-robot removed the do-not-merge/work-in-progress label on Jan 13, 2025
@k8s-ci-robot requested a review from mimowo on January 13, 2025 08:09
Signed-off-by: kerthcet <[email protected]>
Signed-off-by: kerthcet <[email protected]>
@PBundyra (Contributor)

/lgtm
cc @mimowo

@k8s-ci-robot added the lgtm label on Jan 14, 2025
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: 443aefcbe7ebde6a093ee0181c3a61236f54ce8c

@mimowo (Contributor) commented Jan 14, 2025

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kerthcet, mimowo

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Jan 14, 2025
@mimowo (Contributor) commented Jan 14, 2025

/cherry-pick release-0.10 release-0.9

@k8s-infra-cherrypick-robot (Contributor)

@mimowo: once the present PR merges, I will cherry-pick it on top of release-0.10 in a new PR and assign it to you.

In response to this:

/cherry-pick release-0.10 release-0.9

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot merged commit d638e2a into kubernetes-sigs:main on Jan 14, 2025
17 checks passed
@k8s-ci-robot added this to the v0.11 milestone on Jan 14, 2025
@k8s-infra-cherrypick-robot (Contributor)

@mimowo: new pull request created: #3970

In response to this:

/cherry-pick release-0.10 release-0.9

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@mimowo (Contributor) commented Jan 14, 2025

Proposal:
/release-note-edit

Fix building TAS assignments for workloads with multiple PodSets (e.g. JobSet or Kubeflow Jobs). The assignment was computed independently for each PodSet, which could result in conflicts that rendered the pods unschedulable by the kube-scheduler.

@k8s-ci-robot added the release-note label and removed the release-note-none label on Jan 14, 2025
Development

Successfully merging this pull request may close these issues.

[Discussion][TAS] Best effort placements for pods in lower tier of topology